Category-based Inductive Learning in Shared NeMuS
Authors
Abstract
One of the main objectives of cognitive science is to use abstraction to create models that accurately represent the cognitive processes that constitute learning, such as categorisation. Relational knowledge is important in this task, since it is through the reasoning processes of induction and analogy over relations that the mind "creates" categories (it later establishes causal relations between them by using induction and abduction), and analogies exemplify crucial properties of relational processing, such as structure-consistent mapping [2]. Given the complexity of the task, no model to date has accomplished it completely. The associationist/connectionist approach represents these processes through associations between different pieces of information, implemented with artificial neural networks. However, it faces a great obstacle: the idea (called propositional fixation) that neural networks cannot represent relational knowledge. A recent attempt to tackle symbolic extraction from artificial neural networks was proposed in [1]. The cognitive agent Amao uses a shared Neural Multi-Space (Shared NeMuS) of coded first-order expressions to model the various aspects of logical formulae as separate spaces, with importance vectors of different sizes. Amao [4] uses inverse unification as the generalisation mechanism for learning from a set of logically connected expressions of the Herbrand Base (HB). Here we present an experiment that uses this learning mechanism to model a simple version of the train set from Michalski's train problem [3].
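To make the setting concrete, the sketch below (not Amao's implementation) encodes two trains from a simplified Michalski-style train set as ground atoms of a Herbrand Base and applies a naive anti-unification step, the inverse of unification used here as a generalisation mechanism. The predicate and constant names (`has_car`, `east1`, etc.) are illustrative assumptions.

```python
def lgg_atom(a1, a2, subst):
    """Anti-unify two ground atoms with the same predicate symbol:
    identical terms are kept, differing term pairs are replaced by
    a shared fresh variable recorded in `subst`."""
    pred1, args1 = a1
    pred2, args2 = a2
    if pred1 != pred2 or len(args1) != len(args2):
        return None  # atoms do not generalise to a common pattern
    out = []
    for t1, t2 in zip(args1, args2):
        if t1 == t2:
            out.append(t1)
        else:
            key = (t1, t2)
            if key not in subst:
                subst[key] = f"X{len(subst)}"  # fresh variable
            out.append(subst[key])
    return (pred1, tuple(out))

# Two eastbound trains described by ground atoms of the Herbrand Base.
train1 = ("has_car", ("east1", "car11"))
train2 = ("has_car", ("east2", "car21"))

subst = {}
print(lgg_atom(train1, train2, subst))  # ('has_car', ('X0', 'X1'))
```

Repeating this over all atoms describing the positive trains yields candidate generalised patterns, which is the intuition behind learning by inverse unification over the Herbrand Base.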
Similar resources
Inductive Learning in Shared Neural Multi-Spaces
The learning of rules from examples is of continuing interest to machine learning since it allows generalization from fewer training examples. Inductive Logic Programming (ILP) generates hypothetical rules (clauses) from a knowledge base augmented with (positive and negative) examples. A successful hypothesis entails all positive examples and does not entail any negative example. The Shared Neu...
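The success criterion stated above can be expressed as a short predicate. The sketch below is a hedged illustration, with `covers` standing in for logical entailment; the function names and toy examples are assumptions, not part of any ILP system's API.

```python
def successful(covers, hypothesis, positives, negatives):
    """A hypothesis succeeds iff it covers (entails) every positive
    example and no negative example."""
    return (all(covers(hypothesis, e) for e in positives)
            and not any(covers(hypothesis, e) for e in negatives))

# Toy entailment relation: a hypothesis is a set of atoms it covers.
covers = lambda h, e: e in h

print(successful(covers, {"p1", "p2"}, ["p1", "p2"], ["n1"]))  # True
print(successful(covers, {"p1", "n1"}, ["p1", "p2"], ["n1"]))  # False
```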
Learning about Actions and Events in Shared NeMuS
The categorization process of information from pure data or learned in unsupervised artificial neural networks is still manual, especially in the labeling phase. Such a process is fundamental to knowledge representation [6], especially for symbol-based systems like logic, natural language processing and textual information retrieval. Unfortunately, applying categorization theory in large volume...
Category structure modulates interleaving and blocking advantage in inductive category acquisition
Research in inductive category learning has demonstrated that interleaving exemplars of categories results in better performance than presenting each category in a separate block. Two experiments indicate that the advantage of interleaved over blocked presentation is modulated by the structure of the categories being presented. More specifically, interleaved presentation results in better perfo...
An Attention-Based Model of Learning a Function and a Category in Parallel
Minda and Ross (2004) described two experiments where subjects simultaneously learned both a category and a function. They showed that when both tasks were performed in parallel on the same stimuli, the inductive bias on the categorization task–to focus on a single attribute–spread to the function learning task. Here, we present a new computational model of this phenomenon, using the ALCOVE mod...
Model of Organization Learning in Islamic Azad University
This study aims to present a model of the learning organization in Islamic Azad University. It is applied research in terms of purpose and quantitative in terms of implementation. In the first step of the research, after analyzing the information using inductive content analysis, 15 components were identified and categorized into 5 dimensions of learning levels, systematic thinking, shared vis...